About the project


RStudio Exercise 2: data analysis

This week I have learned a lot about data wrangling in the excellent course on DataCamp. I am still quite slow but hope I will become faster before the end of this course!

The data

The data set I will use for this exercise is a subset of a data set collected in an international survey of approaches to learning. For this subset, I have picked a few background variables (gender, age, attitude (towards statistics) and points (exam points)) and mutated three new variables based on multiple questions concerning deep, surface and strategic learning (variables deep, surf and stra). Observations where the student did not get any exam points are not included. Thus, the data set I will use for this exercise contains 7 variables and 166 observations.
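Combination variables like these are typically computed by averaging sets of Likert-scale questions; a minimal sketch with toy data (the column names below are hypothetical, not the actual survey items):

# hedged sketch: averaging question columns into a combination variable (toy data)
answers <- data.frame(D03 = c(4, 2), D11 = c(3, 3), D19 = c(4, 1))  # hypothetical items
answers$deep <- rowMeans(answers[, c("D03", "D11", "D19")])         # row mean over the items
answers$deep  # 3.67 and 2.00 for the two toy respondents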

# reading in the data
learning2014 <- read.csv("~/Dropbox/Abortforskning/opendatascience/IODS_project/data/learning2014.csv", header=TRUE, sep=",")
str(learning2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ Attitude: int  37 31 25 35 37 38 35 29 38 21 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...
dim(learning2014)
## [1] 166   7

Graphical overview and descriptive statistics

Below is an overview of the data by the gender of the students (females are in pink - I am sorry for the heteronormative approach of RStudio).

#libraries
library(ggplot2)
library(GGally)

p <- ggpairs(learning2014, mapping = aes(col=gender, alpha=0.3), lower = list(combo = wrap("facethist", bins = 20)))

p

Next are the summary statistics of the data and its variables.

summary(learning2014)
##  gender       Age           Attitude          deep            stra      
##  F:110   Min.   :17.00   Min.   :14.00   Min.   :1.583   Min.   :1.250  
##  M: 56   1st Qu.:21.00   1st Qu.:26.00   1st Qu.:3.333   1st Qu.:2.625  
##          Median :22.00   Median :32.00   Median :3.667   Median :3.188  
##          Mean   :25.51   Mean   :31.43   Mean   :3.680   Mean   :3.121  
##          3rd Qu.:27.00   3rd Qu.:37.00   3rd Qu.:4.083   3rd Qu.:3.625  
##          Max.   :55.00   Max.   :50.00   Max.   :4.917   Max.   :5.000  
##       surf           Points     
##  Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.417   1st Qu.:19.00  
##  Median :2.833   Median :23.00  
##  Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :4.333   Max.   :33.00

So, the mean age of the students is 25.5 years, with a range from 17 to 55. The mean attitude is 31.4, ranging from 14 to 50. Mean exam points are 22.7 (range 7 to 33).

Not surprisingly, attitude towards statistics is positively correlated with exam points among both males and females. More correlations can be read from the graphical overview.

Regression model

Let us fit a linear regression model with exam points as the dependent variable. I choose to include three independent variables: attitude, age and gender, hypothesizing that in addition to attitude, the students' age and gender would also affect their exam scores.

my_model <- lm(Points ~ Attitude + Age + gender, data = learning2014)
summary(my_model)
## 
## Call:
## lm(formula = Points ~ Attitude + Age + gender, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.4590  -3.3221   0.2186   4.0247  10.4632 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 13.42910    2.29043   5.863 2.48e-08 ***
## Attitude     0.36066    0.05932   6.080 8.34e-09 ***
## Age         -0.07586    0.05367  -1.414    0.159    
## genderM     -0.33054    0.91934  -0.360    0.720    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.315 on 162 degrees of freedom
## Multiple R-squared:  0.2018, Adjusted R-squared:  0.187 
## F-statistic: 13.65 on 3 and 162 DF,  p-value: 5.536e-08

I was right about attitude: as seen earlier, it correlates positively and significantly with exam points, with a beta coefficient of 0.36, i.e. 0.36 points more in the exam for every one-unit increase in attitude, and an extremely small p-value.

In contrast, gender and age show no significant association with exam points, with p-values of 0.72 and 0.16 respectively.

I will next adjust the model by leaving out the non-significant covariates, leaving attitude as the only explanatory variable.

my_model.2 <- lm(Points ~ Attitude, data = learning2014)
summary(my_model.2)
## 
## Call:
## lm(formula = Points ~ Attitude, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 11.63715    1.83035   6.358 1.95e-09 ***
## Attitude     0.35255    0.05674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

In this model, with attitude as the only explanatory variable, exam points increase by 0.35 for every one-unit increase in attitude towards statistics, and the result is statistically significant.

This linear model explains about 19 % of the variability in the observations, as read from the multiple R-squared.
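The fitted model can also be used for prediction; for instance, a hypothetical student with an attitude score of 30 would be predicted about 11.64 + 0.353 × 30 ≈ 22.2 exam points:

# predicted exam points for a student with attitude = 30
predict(my_model.2, newdata = data.frame(Attitude = 30))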

Diagnostic plots

Finally, I will produce some diagnostic plots. Looking at the residuals can tell me how well (or poorly) my model represents my data. Below are three commonly used diagnostic plots.

I) Residuals vs Fitted. This plot can reveal non-linear patterns in the residuals. Here it shows no distinct pattern among the observations, so it is reasonable to assume that the residuals have constant variance, which is one of the main assumptions of linear regression.

II) The QQ-plot. This plot shows whether the residuals are normally distributed (another key assumption). The residuals fall neatly on a straight line, indicating that they are normally distributed.

III) Residuals vs Leverage. This plot helps find influential cases that could disproportionately affect the results. Such outliers are not always a bad thing, but it is important to identify them, and sometimes even repeat the analysis after excluding them. In this plot there are no evident outliers; both the upper-right and lower-right corners are empty, so outliers are not a problem in this model.

par(mfrow = c(2,2)) 
plot(my_model.2, which=c(1,2,5))

The End

I hope you enjoyed reading this short summary of my work this week, I sure enjoyed writing it :)

Have a good one!


# Chapter 3

---
title: "chapter3.Rmd"
author: "Frida Gyllenberg"
date: "22 Nov 2017"
output: html_document
---

#libraries
library(tidyverse)
library(gridExtra)
## Loading tidyverse: tibble
## Loading tidyverse: tidyr
## Loading tidyverse: readr
## Loading tidyverse: purrr
## Loading tidyverse: dplyr
## Conflicts with tidy packages ----------------------------------------------
## filter(): dplyr, stats
## lag():    dplyr, stats
## 
## Attaching package: 'gridExtra'
## The following object is masked from 'package:dplyr':
## 
##     combine

The data

The data set I will use for this exercise is a combination of two data sets on student achievement in secondary education at two Portuguese schools. Data were collected using school reports and questionnaires. The two data sets concern mathematics and Portuguese. When combining the two data sets, variables not used for joining were combined by averaging. Two new variables have been computed: 'alc_use' is the average of 'Dalc' and 'Walc', and 'high_use' is TRUE if 'alc_use' is higher than 2 and FALSE otherwise.

The data contains 382 observations of the following 35 variables:

alc <- read.csv("~/Dropbox/Abortforskning/opendatascience/IODS_project/data/alc.csv", header = TRUE, sep = ",")
dim(alc)
## [1] 382  35
colnames(alc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "nursery"    "internet"   "guardian"   "traveltime"
## [16] "studytime"  "failures"   "schoolsup"  "famsup"     "paid"      
## [21] "activities" "higher"     "romantic"   "famrel"     "freetime"  
## [26] "goout"      "Dalc"       "Walc"       "health"     "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"
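As a sanity check, the two derived variables can be recomputed directly from the weekday ('Dalc') and weekend ('Walc') columns; a minimal sketch:

# recompute the derived variables from Dalc and Walc (sanity check)
library(dplyr)
alc_check <- mutate(alc,
                    alc_use2  = (Dalc + Walc) / 2,  # average of weekday and weekend use
                    high_use2 = alc_use2 > 2)       # TRUE when the average exceeds 2
all(alc_check$alc_use2 == alc_check$alc_use)        # should be TRUE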

Hypotheses regarding high use of alcohol

I choose four different variables and investigate their relationship with high use of alcohol, with the following hypotheses:

i) sex: I assume high use of alcohol is more prevalent among men.
ii) absences: my hypothesis is that high use is associated with more absence from school.
iii) activities: high use of alcohol could be related to fewer extra-curricular activities.
iv) G3, i.e. final grade: high use during school could result in lower grades.

Exploring my chosen variables numerically and graphically

I start by plotting the two numeric variables in box plots. Regarding grades, the median grade is lower among students with high alcohol use. Student absences, on the other hand, are more frequent among students with high use of alcohol.

g1 <- ggplot(alc, aes(x = high_use, y = G3))+ geom_boxplot() + ggtitle("Grade")
g2 <- ggplot(alc, aes(x=high_use, y=absences)) + geom_boxplot()+ggtitle("Student absences")
grid.arrange(g1, g2, nrow=1, ncol=2)

For the relationship of gender and extra-curricular activities, both factors with two levels, I choose to explore them in cross-tabulations. High use of alcohol is more frequent among men than women: almost 40 % of male students (72/184) vs. about 20 % of females (42/198) are classified as high users. Regarding extra-curricular activities, there is no clear difference: out of 114 high users, 55 (48 %) have an activity, whereas out of 268 non-high users, 146 (54 %) have an activity.

print("Cross tabulation of alcohol use by sex")
## [1] "Cross tabulation of alcohol use by sex"
table(alc$high_use, alc$sex)
##        
##           F   M
##   FALSE 156 112
##   TRUE   42  72
print("Cross tabulation of alcohol use by activities")
## [1] "Cross tabulation of alcohol use by activities"
table(alc$high_use, alc$activities)
##        
##          no yes
##   FALSE 122 146
##   TRUE   59  55

Logistic regression

Next, I fit a logistic regression with high_use as the target variable and my four chosen variables as explanatory variables. Below are the odds ratios (i.e. the exponents of the model estimates) with 95 % confidence intervals.

m <- glm(high_use ~ absences + sex + activities + G3, data = alc, family = "binomial")
cbind(coef(m), confint(m))%>% exp %>% round(4)
## Waiting for profiling to be done...
##                       2.5 % 97.5 %
## (Intercept)   0.4198 0.1661 1.0379
## absences      1.0975 1.0513 1.1505
## sexM          2.7800 1.7340 4.5234
## activitiesyes 0.7138 0.4444 1.1421
## G3            0.9315 0.8675 0.9995

The OR for absences is 1.10, i.e. for every additional absence the odds of being a high user are 10 % higher. Compared to females, males have 2.8-fold odds of being high users. Regarding activities, the result is not significant, and for the grade there may be some relation to lower grades, as the OR is 0.93, but the upper confidence limit approaches 1 (0.9995).

Predictive power of the model

According to my model, only the variables absences and sex had a statistically significant relationship with high use of alcohol. I now assess the predictive power of the model with a 2x2 table of predicted vs. true high use, and also with a plot.

# predict() the probability of high_use
probabilities <- predict(m, type = "response")

# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)

# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability>0.5)

# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction=alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   256   12
##    TRUE     87   27
# initialize a plot of 'high_use' versus 'probability' in 'alc'
ggplot(alc, aes(x = probability, y = high_use, col=prediction)) +
  geom_point()

The model accurately found 256 out of 268 non-high-users, but only 27 out of 114 true high-users, for an overall accuracy of (256 + 27)/382 ≈ 74 %. So, it is not a very good predictive model…
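The same conclusion can be stated as a misclassification rate; a one-liner:

# share of incorrectly classified individuals: (12 + 87)/382 ≈ 0.26
mean(alc$high_use != alc$prediction)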

That is it for tonight; sorry, no bonuses for me :)


# Chapter 4

---
title: "chapter4"
author: "Frida Gyllenberg"
date: "29 Nov 2017"
output: html_document
---

The data

The data set I will use for this exercise is a data set on housing values in the suburbs of Boston. There are 506 observations of the following 14 variables:

CRIM - per capita crime rate by town
ZN - proportion of residential land zoned for lots over 25,000 sq.ft.
INDUS - proportion of non-retail business acres per town
CHAS - Charles River dummy variable (1 if tract bounds river; 0 otherwise)
NOX - nitric oxides concentration (parts per 10 million)
RM - average number of rooms per dwelling
AGE - proportion of owner-occupied units built prior to 1940
DIS - weighted distances to five Boston employment centres
RAD - index of accessibility to radial highways
TAX - full-value property-tax rate per $10,000
PTRATIO - pupil-teacher ratio by town
BLACK - 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
LSTAT - % lower status of the population
MEDV - median value of owner-occupied homes in $1000's

#libraries and data
library(MASS)    # the Boston data set lives in the MASS package
library(dplyr)   # provides glimpse()

getwd()
## [1] "/Users/Frida/Dropbox/Abortforskning/opendatascience/IODS_project"
dim(Boston)
## [1] 506  14
glimpse(Boston)
## Observations: 506
## Variables: 14
## $ crim    <dbl> 0.00632, 0.02731, 0.02729, 0.03237, 0.06905, 0.02985, ...
## $ zn      <dbl> 18.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.5, 12.5, 12.5, 12.5,...
## $ indus   <dbl> 2.31, 7.07, 7.07, 2.18, 2.18, 2.18, 7.87, 7.87, 7.87, ...
## $ chas    <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
## $ nox     <dbl> 0.538, 0.469, 0.469, 0.458, 0.458, 0.458, 0.524, 0.524...
## $ rm      <dbl> 6.575, 6.421, 7.185, 6.998, 7.147, 6.430, 6.012, 6.172...
## $ age     <dbl> 65.2, 78.9, 61.1, 45.8, 54.2, 58.7, 66.6, 96.1, 100.0,...
## $ dis     <dbl> 4.0900, 4.9671, 4.9671, 6.0622, 6.0622, 6.0622, 5.5605...
## $ rad     <int> 1, 2, 2, 3, 3, 3, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, ...
## $ tax     <dbl> 296, 242, 242, 222, 222, 222, 311, 311, 311, 311, 311,...
## $ ptratio <dbl> 15.3, 17.8, 17.8, 18.7, 18.7, 18.7, 15.2, 15.2, 15.2, ...
## $ black   <dbl> 396.90, 396.90, 392.83, 394.63, 396.90, 394.12, 395.60...
## $ lstat   <dbl> 4.98, 9.14, 4.03, 2.94, 5.33, 5.21, 12.43, 19.15, 29.9...
## $ medv    <dbl> 24.0, 21.6, 34.7, 33.4, 36.2, 28.7, 22.9, 27.1, 16.5, ...
colnames(Boston)
##  [1] "crim"    "zn"      "indus"   "chas"    "nox"     "rm"      "age"    
##  [8] "dis"     "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"

Summary of the variables

Here is the summary of all the variables. There is large variance in the crime rate, with a few locations with high crime driving the mean up, while the median per-capita crime rate is much lower (mean = 3.6, median = 0.3). The same pattern is seen in the residential land zoning variable zn.

summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

Graphical overview of the data

First, with the pairs function I display pairwise scatterplots of the predictors in the data set. As this is difficult to read, I continue with boxplots of a few chosen variables; a sketch of both is below.
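A minimal sketch of the plots described above (the choice of variables for the boxplots is my assumption):

# pairwise scatterplots of all 14 variables (dense, hence hard to read)
pairs(Boston)
# boxplots of a few chosen variables
boxplot(Boston$crim, main = "crim")
boxplot(Boston$tax, main = "tax")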

## Graphical overview of correlations within the data

A corrplot of the variables in the data set is a nice way to see how the variables are correlated. Positive correlations are displayed in blue and negative correlations in red; color intensity and the size of the circle are proportional to the correlation coefficients.
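A minimal sketch of how such a corrplot can be produced (assuming the corrplot package is installed, as in Chapter 5):

# compute the correlation matrix and visualize it
library(corrplot)
cor_matrix <- cor(Boston)
corrplot(cor_matrix, method = "circle")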

## Standardising the data set

# center and standardize variables
boston_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)

The scale() function subtracts the column means from the corresponding columns and divides the differences by the column standard deviations. Hence, all means are 0 in the scaled data set.
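Written out by hand for a single column, the operation is (x - mean(x)) / sd(x); a quick check:

# scale() by hand for the crim column
by_hand <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
all.equal(by_hand, as.vector(scale(Boston$crim)))  # should be TRUE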

Categorical crime rate variable.

# summary of the scaled crime rate
summary(boston_scaled$crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419400 -0.410600 -0.390300  0.000000  0.007389  9.924000
# creating a quantile vector of crim 
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
# creating a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks=bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))

table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

Dividing the data set into train and test sets

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)

# choose randomly 80% of the rows (a seed, e.g. set.seed(123), would make the split reproducible)
ind <- sample(n,  size = n * 0.8)

# create train set with only those 80% random rows
train <- boston_scaled[ind,]

# create test set excluding those rows in the train set
test <- boston_scaled[-ind,]

Linear discriminant analysis on the train set and drawing the plot

# linear discriminant analysis
lda.fit <- lda(crime ~., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2722772 0.2500000 0.2252475 0.2524752 
## 
## Group means:
##                   zn      indus          chas        nox          rm
## low       1.00819740 -0.9538015 -0.0933699661 -0.8899476  0.47729624
## med_low  -0.05549714 -0.3693552  0.0005392655 -0.5880476 -0.10957721
## med_high -0.37227292  0.2684691  0.2901138231  0.4584931 -0.01976487
## high     -0.48724019  1.0171096 -0.0407349362  1.0812707 -0.49527304
##                 age        dis        rad        tax    ptratio
## low      -0.9117886  0.8660818 -0.6916223 -0.7236111 -0.4711800
## med_low  -0.4093741  0.3889527 -0.5497747 -0.4994024 -0.1162030
## med_high  0.4333648 -0.3924488 -0.4114238 -0.2779954 -0.3352802
## high      0.8099307 -0.8548369  1.6382099  1.5141140  0.7808718
##                black      lstat        medv
## low       0.37751765 -0.7813671  0.57141633
## med_low   0.31756992 -0.1646445  0.02517642
## med_high  0.05397956  0.1246948  0.11639429
## high     -0.73652654  0.9134106 -0.71297618
## 
## Coefficients of linear discriminants:
##                 LD1           LD2         LD3
## zn       0.10412277  6.098516e-01 -0.95691925
## indus    0.04944465 -5.690786e-01  0.26234280
## chas    -0.01154981 -4.582238e-02  0.05382582
## nox      0.44379995 -6.480301e-01 -1.33241700
## rm       0.03233195 -3.275566e-06 -0.16726026
## age      0.23788676 -3.289875e-01 -0.10492049
## dis     -0.06934622 -3.845012e-01  0.20294638
## rad      3.10416506  8.394852e-01  0.18056024
## tax      0.01130406  2.396288e-01  0.31279334
## ptratio  0.14305902  5.487981e-02 -0.29613288
## black   -0.11051231  2.851557e-02  0.12297983
## lstat    0.20988887 -2.592076e-01  0.28278809
## medv     0.10839554 -4.049106e-01 -0.23944335
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9480 0.0401 0.0118
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col=classes, pch=classes)
lda.arrows(lda.fit, myscale = 1)

## Prediction

#saving the crime categories from the test set
crime.correct <- test$crime

#remove crime variable from test dataset

test2 <- dplyr::select(test, -crime)
colnames(test) # with crime
##  [1] "zn"      "indus"   "chas"    "nox"     "rm"      "age"     "dis"    
##  [8] "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"    "crime"
test$crime #categorical
##   [1] low      low      med_high med_high med_high med_high med_high
##   [8] med_high med_high low      med_low  low      low      med_low 
##  [15] med_low  med_low  low      low      med_low  med_low  med_low 
##  [22] med_low  med_low  med_low  low      med_low  med_low  med_high
##  [29] med_high med_high med_low  med_high med_high med_high med_high
##  [36] med_high med_high med_low  low      med_low  med_low  med_low 
##  [43] med_high med_high med_high med_high med_high med_high med_high
##  [50] med_high med_high med_low  med_low  med_low  med_low  med_low 
##  [57] low      med_high med_high med_high low      low      low     
##  [64] med_low  low      med_high med_high med_high med_high med_high
##  [71] low      low      low      high     med_high high     high    
##  [78] high     high     high     high     high     high     high    
##  [85] high     high     high     high     high     high     high    
##  [92] high     high     high     high     high     high     high    
##  [99] high     med_low  med_low  med_high
## Levels: low med_low med_high high
colnames(test2) # without crime
##  [1] "zn"      "indus"   "chas"    "nox"     "rm"      "age"     "dis"    
##  [8] "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"
# predict classes with the test data (test2, i.e. without the crime variable)
lda.pred <- predict(lda.fit, newdata = test2)

# cross tabulate the results
table(correct = crime.correct, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low        6      10        1    0
##   med_low    1      19        5    0
##   med_high   1      12       21    1
##   high       0       0        0   25

The prediction works reasonably well, especially for the med_high and high crime rates; the high category is predicted perfectly, whereas the low category is often misclassified as med_low.
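The overall accuracy can be computed directly from the predictions; a one-liner:

# share of correctly classified test observations (71/102 ≈ 0.70 here)
mean(lda.pred$class == crime.correct)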

Reloading, standardizing and plotting again

This part was not clear to me; I wonder if I got it right…

library(MASS)
View(Boston)
#standardize the data set
Boston_scaled2 <- scale(Boston)

# calculating the Euclidean distance between observations, the most common distance measure
dist_eu <- dist(Boston_scaled2)

# look at the summary of the distances
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4620  4.8240  4.9110  6.1860 14.4000
# k-means clustering with 3 centers (run here on the unscaled data; a sketch for
# comparing different numbers of clusters follows at the end of this section)
km <- kmeans(Boston, centers = 3)
summary(km)
##              Length Class  Mode   
## cluster      506    -none- numeric
## centers       42    -none- numeric
## totss          1    -none- numeric
## withinss       3    -none- numeric
## tot.withinss   1    -none- numeric
## betweenss      1    -none- numeric
## size           3    -none- numeric
## iter           1    -none- numeric
## ifault         1    -none- numeric
# plot the Boston dataset with clusters
pairs(Boston, col=km$cluster)
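To actually compare different numbers of clusters, the total within-cluster sum of squares can be plotted for a range of k (the elbow method); a minimal sketch, run here on the scaled data as an assumption:

# total WCSS for k = 1..10 clusters; look for the "elbow" where the drop levels off
set.seed(123)  # kmeans uses random starting centers
twcss <- sapply(1:10, function(k) kmeans(Boston_scaled2, centers = k)$tot.withinss)
plot(1:10, twcss, type = "b", xlab = "number of clusters", ylab = "total WCSS")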

***

# Chapter 5

---
title: "chapter5"
author: "Frida Gyllenberg"
date: "5 Dec 2017"
output: html_document
---

#libraries
library(tidyverse)
library(ggplot2)
library(GGally)
library(corrplot)
library(stats)
library(FactoMineR)

The human data

We are using a subset of the 'human' data, gathered by the United Nations Development Programme, with the following 8 variables:

Edu2.FM - ratio of the proportion of females to males with at least secondary education
Labo.FM - ratio of the proportion of females to males in the labour force
Edu.Exp - expected years of schooling
Life.Exp - life expectancy at birth
GNI - gross national income per capita
Mat.Mor - maternal mortality ratio
Ado.Birth - adolescent birth rate
Parli.F - percentage of female representatives in parliament

This data set has 155 observations of the 8 variables described above; each observation corresponds to one country. The data have been obtained from multiple national data registers.

#reading data

human <- read.table("http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/human2.txt", sep  =",", header = T)
dim(human)
## [1] 155   8
str(human)
## 'data.frame':    155 obs. of  8 variables:
##  $ Edu2.FM  : num  1.007 0.997 0.983 0.989 0.969 ...
##  $ Labo.FM  : num  0.891 0.819 0.825 0.884 0.829 ...
##  $ Edu.Exp  : num  17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
##  $ Life.Exp : num  81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
##  $ GNI      : int  64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
##  $ Mat.Mor  : int  4 6 6 5 6 7 9 28 11 8 ...
##  $ Ado.Birth: num  7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
##  $ Parli.F  : num  39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...
summary(human)
##     Edu2.FM          Labo.FM          Edu.Exp         Life.Exp    
##  Min.   :0.1717   Min.   :0.1857   Min.   : 5.40   Min.   :49.00  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:11.25   1st Qu.:66.30  
##  Median :0.9375   Median :0.7535   Median :13.50   Median :74.20  
##  Mean   :0.8529   Mean   :0.7074   Mean   :13.18   Mean   :71.65  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:15.20   3rd Qu.:77.25  
##  Max.   :1.4967   Max.   :1.0380   Max.   :20.20   Max.   :83.50  
##       GNI            Mat.Mor         Ado.Birth         Parli.F     
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50

Graphic overview

Below are a ggpairs plot and a corrplot of the data set. In the corrplot, positive correlations are displayed in blue and negative correlations in red; color intensity and the size of the circle are proportional to the correlation coefficients.

There is a strong negative correlation between e.g. maternal mortality and life expectancy, and a positive correlation between expected education and life expectancy. When looking at the correlations, one needs to remember the possibility of confounding factors not measured here, e.g. low socioeconomic status.

# corrplot
ggpairs(human)

M <- cor(human)
corrplot(M, method = "circle")

Principal components analysis of non-standardized variables

PCA is used to bring out strong patterns in a data set and is often used to make data easier to explore and visualize. I start with a PCA of the non-standardized data set.

# perform principal component analysis (with the SVD method)
pca_human <- prcomp(human)

# draw a biplot of the principal component representation and the original variables
biplot(pca_human, choices = 1:2, cex=c(0.8,1), col=c("grey40", "deeppink2"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length
## = arrow.len): zero-length arrow is of indeterminate angle and so skipped

Literally, there is only one arrow left: the GNI variable (R repeats the warning above once for each arrow it skips). Because the data are not standardized, the scale of GNI is totally different from that of the other variables, so GNI dominates the plot.

Principal components analysis of standardized variables

Next I repeat the same analysis, but with standardized variables.

# standardize the variables
human_std <- scale(human)
# print out summaries of the standardized variable
summary(human_std)
##     Edu2.FM           Labo.FM           Edu.Exp           Life.Exp      
##  Min.   :-2.8189   Min.   :-2.6247   Min.   :-2.7378   Min.   :-2.7188  
##  1st Qu.:-0.5233   1st Qu.:-0.5484   1st Qu.:-0.6782   1st Qu.:-0.6425  
##  Median : 0.3503   Median : 0.2316   Median : 0.1140   Median : 0.3056  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5958   3rd Qu.: 0.7350   3rd Qu.: 0.7126   3rd Qu.: 0.6717  
##  Max.   : 2.6646   Max.   : 1.6632   Max.   : 2.4730   Max.   : 1.4218  
##       GNI             Mat.Mor          Ado.Birth          Parli.F       
##  Min.   :-0.9193   Min.   :-0.6992   Min.   :-1.1325   Min.   :-1.8203  
##  1st Qu.:-0.7243   1st Qu.:-0.6496   1st Qu.:-0.8394   1st Qu.:-0.7409  
##  Median :-0.3013   Median :-0.4726   Median :-0.3298   Median :-0.1403  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.3712   3rd Qu.: 0.1932   3rd Qu.: 0.6030   3rd Qu.: 0.6127  
##  Max.   : 5.6890   Max.   : 4.4899   Max.   : 3.8344   Max.   : 3.1850
# perform principal component analysis (with the SVD method)
pca_human <- prcomp(human_std)
# draw a biplot of the principal component representation and the original variables
biplot(pca_human, choices = 1:2, cex=c(0.8,1), col=c("grey40", "deeppink2"))

Now the biplot shows much more than the previous one. A summary of the model shows the variance explained by each component:

s <- summary(pca_human)
s
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6
## Standard deviation     2.0708 1.1397 0.87505 0.77886 0.66196 0.53631
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595
## Cumulative Proportion  0.5361 0.6984 0.79413 0.86996 0.92473 0.96069
##                            PC7     PC8
## Standard deviation     0.45900 0.32224
## Proportion of Variance 0.02634 0.01298
## Cumulative Proportion  0.98702 1.00000

The first principal component PC1 captures GNI and life-expectancy-related phenomena: maternal mortality and adolescent births point in one direction, and GNI and life expectancy in the other.
PC2 describes the job market and female participation in parliament.
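This interpretation can be checked against the loadings stored in the prcomp object; a one-liner:

# inspect which original variables load on each principal component
round(pca_human$rotation, 2)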

Tea data

Next we will look at the tea data set, and a subset of it. Below are the dimensions, structure and summary of the data, together with a graphical visualisation of the six chosen variables.

data(tea)
dim(tea)
## [1] 300  36
str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
# choosing only a few variables
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- dplyr::select(tea, one_of(keep_columns))
summary(tea_time)
##         Tea         How                      how           sugar    
##  black    : 74   alone:195   tea bag           :170   No.sugar:155  
##  Earl Grey:193   lemon: 33   tea bag+unpackaged: 94   sugar   :145  
##  green    : 33   milk : 63   unpackaged        : 36                 
##                  other:  9                                          
##                   where           lunch    
##  chain store         :192   lunch    : 44  
##  chain store+tea shop: 78   Not.lunch:256  
##  tea shop            : 30                  
## 
dim(tea_time)
## [1] 300   6
str(tea_time)
## 'data.frame':    300 obs. of  6 variables:
##  $ Tea  : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How  : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ how  : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
#visualize
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free")+ theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8)) + geom_bar()
## Warning: attributes are not identical across measure variables;
## they will be dropped

MCA

MCA (multiple correspondence analysis) is used to detect patterns in categorical data.

# multiple correspondence analysis
mca <- MCA(tea_time, graph = FALSE)

# summary of the model
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6
## Variance               0.279   0.261   0.219   0.189   0.177   0.156
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953
##                        Dim.7   Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.144   0.141   0.117   0.087   0.062
## % of var.              7.841   7.705   6.392   4.724   3.385
## Cumulative % of var.  77.794  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898
##                       cos2  v.test     Dim.3     ctr    cos2  v.test  
## black                0.003   0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            0.027   2.867 |   0.433   9.160   0.338  10.053 |
## green                0.107  -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone                0.127  -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                0.035   3.226 |   1.329  14.771   0.218   8.081 |
## milk                 0.020   2.422 |   0.013   0.003   0.000   0.116 |
## other                0.102   5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag              0.161  -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged   0.478  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged           0.141  -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |
# visualize MCA
plot(mca, invisible=c("ind"), habillage="quali")